A Personal data Value at Risk Approach

Enriquez, Luis

arXiv.org Artificial Intelligence

What if the main data protection vulnerability is risk management? Data Protection merges three disciplines: data protection law, information security, and risk management. Nonetheless, very little research has been done in the field of data protection risk management, where subjectivity and superficiality are the dominant state of the art. Since the GDPR tells you what to do, but not how to do it, the approach to GDPR compliance remains a gray zone, where the trend is to rely on rules of thumb. Considering that the most important goal of risk management is to reduce uncertainty in order to make informed decisions, risk management for the protection of the rights and freedoms of data subjects cannot be disconnected from the impact materialization that data controllers and processors need to assess. This paper proposes a quantitative approach to data protection risk-based compliance from a data controller's perspective, with the aim of proposing a mindset change, where data protection impact assessments can be improved by using data protection analytics, quantitative risk analysis, and calibrated expert opinions.
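The abstract's combination of quantitative risk analysis and calibrated expert opinions can be illustrated with a minimal Monte Carlo sketch. This is not the paper's method, only a generic Value-at-Risk simulation; the breach probability and the triangular loss parameters are hypothetical placeholders standing in for expert-calibrated estimates of incident frequency and impact.

```python
import random

def simulate_personal_data_var(n_trials=100_000,
                               breach_prob=0.2,
                               loss_low=10_000, loss_mode=50_000,
                               loss_high=500_000,
                               confidence=0.95,
                               seed=42):
    """Monte Carlo estimate of an annual 'personal data Value at Risk'.

    breach_prob and the triangular loss parameters are illustrative
    stand-ins for calibrated expert estimates, not real figures.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        # One simulated year: a breach either occurs or it does not.
        if rng.random() < breach_prob:
            losses.append(rng.triangular(loss_low, loss_high, loss_mode))
        else:
            losses.append(0.0)
    losses.sort()
    # VaR at the chosen confidence level: the annual loss exceeded in
    # only (1 - confidence) of simulated years.
    return losses[int(confidence * n_trials)]

var_95 = simulate_personal_data_var()
print(f"95% annual VaR: EUR {var_95:,.0f}")
```

Replacing the point estimates that dominate qualitative impact assessments with distributions like this is what turns a compliance checklist into a decision-support tool, which is the mindset change the abstract argues for.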


OpenAI hit with another privacy complaint over ChatGPT's love of making stuff up

Engadget

OpenAI has been hit with a privacy complaint in Austria by an advocacy group called NOYB, which stands for None Of Your Business. The complaint alleges that the company's ChatGPT bot repeatedly provided incorrect information about a real individual (who for privacy reasons is not named in the complaint), as reported by Reuters. This may breach EU privacy rules. The chatbot allegedly spat out incorrect birthdate information for the individual, instead of just saying it didn't know the answer to the query. Like politicians, AI chatbots like to confidently make stuff up and hope we don't notice.


ChatGPT is once again available in Italy after a temporary ban

Engadget

OpenAI says ChatGPT is once again available in Italy after it addressed a series of conditions set out by regulators. The Garante data protection authority wanted OpenAI to resolve several issues by the end of this month in order to lift a temporary ban on the chatbot. "ChatGPT is available again to our users in Italy," OpenAI told the Associated Press in a statement. "We are excited to welcome them back, and we remain dedicated to protecting their privacy." Italian regulators blocked ChatGPT in March over concerns that the AI's training methods and chatbot violated the European Union's General Data Protection Regulation (GDPR).


ChatGPT available to users in Italy a month after temporary ban

Al Jazeera

Access to the ChatGPT chatbot has been restored in Italy after its maker OpenAI "addressed or clarified" issues raised by Italy's data protection authority, Italian authorities and OpenAI have said. Microsoft Corp-backed OpenAI took ChatGPT offline in Italy last month after the country's Data Protection Authority, also known as Garante, temporarily banned the chatbot and launched a probe into the artificial intelligence application's suspected breach of privacy rules. The Italian Data Protection Authority described its action as provisional "until ChatGPT respects privacy". The watchdog said ChatGPT developer OpenAI had no legal basis to justify "the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform". It further referenced a data breach on March 20 when user conversations and payment information were compromised, a problem the United States firm blamed on a bug.


OpenAI's hunger for data is coming back to bite it

MIT Technology Review

In AI development, the dominant paradigm is that the more training data, the better. OpenAI's GPT-2 model had a data set consisting of 40 gigabytes of text. GPT-3, which ChatGPT is based on, was trained on 570 GB of data. OpenAI has not shared how big the data set for its latest model, GPT-4, is. But that hunger for larger models is now coming back to bite the company. In the past few weeks, several Western data protection authorities have started investigations into how OpenAI collects and processes the data powering ChatGPT.


Italian minister slams country's temporary ban on US-based AI chatbot

FOX News

Kurt 'The Cyberguy' Knutson weighs in on the new artificial intelligence bot known as ChatGPT, which could potentially allow students to cheat in school, on 'Fox & Friends Weekend.' Italy's deputy prime minister criticized the country's Data Protection Authority for implementing an immediate ban on AI chatbot ChatGPT over privacy concerns. "I find the decision of the Privacy Watchdog that forced #ChatGPT to prevent access from Italy disproportionate," Matteo Salvini, leader of a populist party known as the League Party, wrote on Instagram, according to Reuters. Salvini continued that the Data Protection Authority was "hypocritical" in temporarily banning ChatGPT and called for common sense, as "privacy issues concern practically all online services," according to Reuters. Italy's Data Protection Authority, an independent agency that works to "protect fundamental rights and freedoms in connection with the processing of personal data," implemented a ban on OpenAI's ChatGPT program last week. OpenAI, a California-based company backed by Microsoft, officially disabled ChatGPT for Italian users on Friday.


Italy orders ChatGPT blocked citing data protection concerns

#artificialintelligence

Two days after an open letter called for a moratorium on the development of more powerful generative AI models so regulators can catch up with the likes of ChatGPT, Italy's data protection authority has just put out a timely reminder that some countries do have laws that already apply to cutting edge AI: it has ordered OpenAI to stop processing people's data locally with immediate effect. The Italian DPA said it's concerned that the ChatGPT maker is breaching the European Union's General Data Protection Regulation (GDPR), and is opening an investigation. Specifically, the Garante said it has issued the order to block ChatGPT over concerns OpenAI has unlawfully processed people's data as well as over the lack of any system to prevent minors from accessing the tech. The San Francisco-based company has 20 days to respond to the order, backed up by the threat of some meaty penalties if it fails to comply. It's worth noting that since OpenAI does not have a legal entity established in the EU, any data protection authority is empowered to intervene, under the GDPR, if it sees risks to local users. The GDPR applies whenever EU users' personal data is processed.


Clearview AI fined for violating the European GDPR privacy law

#artificialintelligence

In context: French authorities have imposed the maximum possible fine against Clearview AI, a biometric startup selling its controversial facial recognition technology to governments and law enforcement worldwide. The company must delete the data already acquired on French citizens or face an additional €100,000 fine per day. Clearview AI received yet another fine for its biometric profiling activities in Europe, this time for illegally collecting and using data belonging to French citizens without their knowledge. The Commission nationale de l'informatique et des libertés (CNIL), France's data protection authority, imposed a 20 million euros penalty against the American company after a lengthy investigation and an unfruitful cooperation attempt. Clearview markets facial recognition tools to companies, individuals, and law enforcement, boasting its algorithm can detect any individual with "99% accuracy" in a database with 30 billion images of faces.


Artificial Intelligence (AI) and the Risk of Bias in Recruitment Decisions

#artificialintelligence

As part of the UK data protection authority's new three-year strategy (ICO25), launched on 14 July, UK Information Commissioner John Edwards announced an investigation into the use of AI systems in recruitment. The investigation will have a particular focus on the potential for bias and discrimination stemming from the algorithms and training data underpinning AI systems used to sift recruitment applications. A key concern is that training data could be negatively impacting the employment opportunities of those from diverse backgrounds. Bias is a particular risk in AI or machine learning systems designed not to solve a problem by following a set of rules, but instead to "learn" from examples of what the solution looks like. If the data sets used to provide those examples have bias built in, then an AI system is likely to replicate and amplify that bias.
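The mechanism described above, a system that "learns" from past decisions rather than rules, can be shown with a minimal sketch. The data, the school names, and the screener are entirely hypothetical; the point is only that a model fit to biased historical outcomes replays those outcomes.

```python
from collections import defaultdict

# Toy historical hiring records: (school, hired). Past decisions
# favoured applicants from "school_a" regardless of merit.
history = ([("school_a", True)] * 80 + [("school_a", False)] * 20
           + [("school_b", True)] * 30 + [("school_b", False)] * 70)

def train_screener(data):
    """'Learn' a hire rate per school from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # school -> [hired, total]
    for school, hired in data:
        counts[school][0] += int(hired)
        counts[school][1] += 1
    return {s: h / t for s, (h, t) in counts.items()}

def screen(model, school, threshold=0.5):
    # The screener simply replays the historical hire rate, so any
    # bias in the training data becomes a built-in decision rule.
    return model.get(school, 0.0) >= threshold

model = train_screener(history)
print(model)                       # learned hire rates per school
print(screen(model, "school_a"))   # passes the screen
print(screen(model, "school_b"))   # rejected: historical bias reproduced
```

A real recruitment model would use richer features, but the failure mode the ICO is investigating is the same: if school correlates with a protected characteristic, the learned rates encode and automate that discrimination.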
